
The ‘green shoots’ of AI use in humanitarian action

Wednesday 15 – Friday 17 May 2024 | WP3368


AI has the potential to allow humanitarians to do more work, more effectively, with far fewer resources. However, new technologies have previously been mistaken for a panacea, often resulting in false dawns and wasted resources. Participants discussed why AI is different from other technologies that have promised to transform humanitarian delivery; how AI is already being used in humanitarian action; what characterises existing good practice; and how this can help us understand where AI tools are likely to be most effective.

A key message from the discussion was that while it is positive to experiment with generative AI to potentially alleviate human suffering, it is crucial to experiment responsibly and to document and share learning openly about successes, failures, and challenges. Several current use cases were discussed.

Community-led critical information

The use of AI is currently being explored in partnership with NGOs in three countries to provide community-led critical information through digital tools, channels, and social media. The approach provides timely, trustworthy, and accurate information that allows people to make decisions on, for example, where to get identity documents and how to access health services. A human-intensive model ensures that the information people get is trustworthy, but a team is exploring the use of generative AI to scale it up and create personalised, contextualised information. Work is underway to de-risk the prototype and make it safe to test, measure results, and then take to clients. All results will be published as a global public good.
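
The report does not describe the prototype's internals, but a common pattern for keeping generative output trustworthy is retrieval-grounded generation: the model is only allowed to compose answers from human-verified entries. Below is a minimal sketch of that pattern; the data, the `retrieve` helper, and the `call_llm` function are all hypothetical illustrations, not the team's actual system.

```python
# Minimal sketch of retrieval-grounded generation for community information.
# All names and data are illustrative; `call_llm` stands in for whatever
# generative model the team uses and is NOT a real API.

VERIFIED_FAQ = [
    {"topic": "identity documents",
     "text": "Civil registry office, Main St, open Mon-Fri 9-3. Bring any prior ID."},
    {"topic": "health services",
     "text": "Free primary care clinic at the community centre, walk-ins daily."},
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank human-verified entries by simple word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        VERIFIED_FAQ,
        key=lambda e: len(q_words & set((e["topic"] + " " + e["text"]).lower().split())),
        reverse=True,
    )
    return [e["text"] for e in scored[:k]]

def answer(question: str) -> str:
    """Compose a reply that is constrained to vetted content only."""
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer ONLY from the verified information below. "
        "If the answer is not there, say you do not know.\n\n"
        f"Verified information:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)  # hypothetical LLM call
```

Constraining the prompt to vetted entries is what would let a human-intensive verification model scale without the generative model inventing answers.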

“This is not the first moment we’ve been in a global challenge of what to do about an emerging technology – far from it.”

AI-driven chatbot for education in crises

Another example is an AI-driven chatbot platform that delivers personalised learning and education experiences to crisis-affected children and can operate at scale within 30 days of a crisis. It reaches children on platforms they already use, such as WhatsApp, SMS, and social media. Launched in Nigeria, the approach is now integrating ChatGPT. In Syria, the use of AI with caregivers has delivered strong early childhood development outcomes.
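
As a hedged illustration of how such a chatbot might wrap a general-purpose model like ChatGPT, the sketch below implements a single tutoring turn using the OpenAI Python client; the system prompt, model name, and function are assumptions for illustration, and the WhatsApp/SMS delivery layer is omitted.

```python
# Sketch of one education-chatbot turn, assuming the OpenAI Python client;
# channel delivery (WhatsApp/SMS webhooks) is out of scope here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = (
    "You are a patient tutor for children in crisis settings. "
    "Use short, simple sentences. Never request personal details."
)

def tutor_reply(history: list[dict], child_message: str) -> str:
    """One chatbot turn: prior conversation plus the child's new message."""
    messages = [{"role": "system", "content": SYSTEM},
                *history,
                {"role": "user", "content": child_message}]
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice, not the project's
        messages=messages,
    )
    return resp.choices[0].message.content
```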

“Collective Crisis Intelligence (CCI) and Participatory AI offer a pragmatic way to ensure that humanitarian AI reflects humanitarian values. Why aren’t we investing in more of this?”

Collective crisis intelligence and participatory AI

A further example is Collective Crisis Intelligence (CCI), which combines the collective intelligence of crisis-affected populations and frontline responders. The organisation wanted to address the risks from AI by giving local frontline responders and crisis-affected communities a major role in shaping the design, development, and evaluation of AI tools, and called this approach Participatory AI.

Evaluations of experiences in Nepal and Cameroon showed that these CCI approaches have the potential to make local humanitarian action more timely and, importantly, more appropriate and responsive to local needs.

Participatory AI methods also helped reduce the potential for bias and harm from AI systems: identifying model blind spots, removing culturally sensitive inputs to the model, and specifying guardrails for when and where an AI tool could and should not be used. Using Participatory AI approaches helped build trust in the AI tool, a critical factor in the adoption and success of AI in any environment. This was all achieved using local data, local talent, and local infrastructure.

“We can influence the trajectory AI takes. We can develop AI that gives local communities agency and increases our accountability to them.”

Citizen voice and ownership for response at scale

Experience in India showed how AI can extract information from the roofs of houses in urban neighbourhoods, using the roofs as a kind of QR code to identify who is vulnerable in a natural disaster. To capture citizens' voices and intelligence for a better humanitarian response, the team created a 'disaster wallet' at household level, with households reporting on the risks and impacts of disasters. These profiles were maintained on a platform that records what kind of assistance households might need.
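
A minimal sketch of what a household-level 'disaster wallet' record might look like follows; the schema and the rule-based mapping from risks to assistance are illustrative assumptions, not the project's actual platform.

```python
# Illustrative 'disaster wallet' record; field names and rules are assumed,
# not the project's real schema.
from dataclasses import dataclass, field

@dataclass
class DisasterWallet:
    household_id: str
    roof_type: str                                   # e.g. inferred from aerial imagery
    self_reported_risks: list[str] = field(default_factory=list)
    reported_impacts: list[str] = field(default_factory=list)

    def needed_assistance(self) -> list[str]:
        """Very simple rule-based mapping from risks/impacts to assistance."""
        needs = []
        if self.roof_type in {"tarpaulin", "thatch"}:
            needs.append("shelter reinforcement")
        if "flooding" in self.self_reported_risks:
            needs.append("early-warning alerts")
        if "home damaged" in self.reported_impacts:
            needs.append("repair cash grant")
        return needs

# Example household profile and the assistance it would flag:
wallet = DisasterWallet("HH-001", "tarpaulin", ["flooding"], [])
print(wallet.needed_assistance())  # ['shelter reinforcement', 'early-warning alerts']
```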

Lessons learnt from this experience include the importance of partnering with the government to create a global public good, with both the government and citizens owning the data. The data is not open, and there is no commercial benefit or ability for the private sector to use it for profit. Actionable insights are built according to people's adaptive capacity, aiming to restore their agency. In designing the approach, the team built the system around what works at scale, rather than scaling up what works.

“We use AI in the most boring ways possible but it has improved the way we work massively.”

Geospatial map data

In Kenya, an NGO is working to adapt to changes in AI while amplifying local community knowledge. It is focused on closing map data gaps, ensuring that a lack of geospatial data is not a barrier to humanitarian response. Integrating road tracing, mobile mapping, and community voices aims to close the gaps. For example, the NGO is supporting the government in mapping a city using AI-derived data, drone imagery, and community validation to support government planning on drainage systems and disability access. Integrating local knowledge into AI approaches is challenging and is done according to context, with some projects using AI for 10% of the work and others 80%. It is also hard to use AI in high-conflict situations.

Predicting the likelihood of violence

In Kenya, another AI approach is being used to predict the likelihood of violence across three areas of the country. The model examines which factors are associated with change in a phenomenon, over what period, and with which actors. An inclusive approach and systematic engagement with lived experience in each context are crucial to the model and its predictive ability. It is important to innovate to capture and model frequently changing phenomena. The team tracked data for 56 towns and cities in Africa over five years, developed a detailed model, and triangulated it with a household survey. This made it possible to identify which phenomena are most associated with change, such as conflict events, perceptions of conflict, or changes in commodity prices. This can then inform the project, its design, and its activities.
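
The report does not specify the model, but the description suggests a panel model relating tracked indicators to violent outcomes. The sketch below shows the general shape of such a model on synthetic data, using scikit-learn logistic regression; the indicators mirror those named above, while the data and coefficients are invented for illustration.

```python
# Sketch of the kind of panel model described above: does a town-month see
# violence, as a function of tracked indicators? Synthetic data; real work
# would use the team's 56-town, 5-year dataset and a richer model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500  # town-month observations
X = np.column_stack([
    rng.poisson(2, n),      # recent conflict events
    rng.uniform(0, 1, n),   # surveyed perception of conflict
    rng.normal(0, 1, n),    # commodity price change (z-score)
])
# Synthetic ground truth: violence more likely with events and price shocks.
y = (0.8 * X[:, 0] + 2.0 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 1, n)) > 3

model = LogisticRegression().fit(X, y)
# Fitted coefficients indicate which indicators are most associated with change.
print(dict(zip(["events", "perception", "price_change"], model.coef_[0])))
```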

In another example, from a small island state, a team trained an AI model on the three main newspapers' crime reporting to identify and predict crime volumes. This gave donors the confidence to act.
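
A hedged sketch of that newspaper-based approach: classify articles as crime-related, then aggregate monthly counts as a proxy for crime volume. The tiny training set and article stream below are invented; a real system would be trained on labelled articles from the three papers.

```python
# Classify articles as crime-related, then count them per month as a crude
# proxy for crime volume. All texts are invented for illustration.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = ["robbery reported downtown", "parliament passes budget",
               "armed theft at market", "rainfall boosts harvest"]
train_labels = [1, 0, 1, 0]  # 1 = crime-related

clf = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(train_texts, train_labels)

stream = [("2024-01", "market theft suspect arrested"),
          ("2024-01", "new school opens"),
          ("2024-02", "string of robberies downtown")]
volume = Counter(month for month, text in stream if clf.predict([text])[0] == 1)
print(volume)  # crime-related article counts per month
```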

Monitoring displacement

A team in Switzerland monitors the displacement of populations through AI, which creates massive efficiencies when identifying needs in disasters. A team of data scientists devotes its capacity to data cleaning and data management, and applies AI to narrow the research space. When disaster strikes, the team conducts rapid needs assessments using speedy processes and simple semantic searches, which give probabilistic returns. The tool can be used within existing processes and yields huge gains in time efficiency. The team has also reclassified and recoded two decades of data to fit the new system, giving the model historical depth. Access to the data is available online, with a system that is more compatible for larger agencies.
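
As an illustration of the 'simple semantic search with a probabilistic return' described above, the sketch below ranks assessment excerpts by embedding similarity using the sentence-transformers library; the excerpts, query, and model choice are assumptions, not the team's actual pipeline.

```python
# Semantic search over assessment excerpts: results are ranked by similarity
# score rather than exact keyword match. Texts and model are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

excerpts = [
    "Families displaced by flooding are sheltering in schools.",
    "Market prices for staple foods have doubled since March.",
    "Water points in the eastern district are contaminated.",
]
corpus_emb = model.encode(excerpts, convert_to_tensor=True)

query_emb = model.encode("where are displaced people staying?", convert_to_tensor=True)
for hit in util.semantic_search(query_emb, corpus_emb, top_k=2)[0]:
    print(round(hit["score"], 2), excerpts[hit["corpus_id"]])
```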

“A better distinction needs to be made between scaling up what works versus what works at scale.”

Discussion

“AI can only figure out from the data – but if you as humans know what works, then you need to lean into that and forget about the hype.”

Key points included:

A common difficulty is how to move beyond the innovation and pilot projects that donors are readily willing to fund to larger-scale efforts that meet vast humanitarian needs but require greater donor commitment. Small-scale pilots are often not scalable in wider contexts. A better distinction needs to be made between scaling up what works versus what works at scale.

Suggestions included not promoting a single tool to scale up, but rather thinking about contexts and systems, and building digital public infrastructure, open structures, and open hardware and software that allow many different organisations to take part and to tailor tools appropriately to local contexts.

Another suggestion for improving the pilot-to-scale pathway was to distribute the ability to solve problems by creating a platform where other innovators and agencies can enter and contribute learning and solutions.

A further suggestion was to integrate AI into existing work rather than creating distinct projects. One team identified a use case and hired staff to integrate it into their workflow. If something is repetitive and predictable, then AI is useful and appropriate. A good general understanding of AI tools and capacities, and of how people can apply them to their work, will allow for better integration of AI; agencies need to promote this understanding internally. This approach might help move away from the 'plague' of pilot projects.

Guardrails, safety procedures, and rules for how AI manages itself were of concern, and questions were asked about the prevention of hallucinations and other potential harms. The tech sector might not be opposed to AI models developing in a range of ways, whereas the humanitarian sector needs clear requirements and analysis of needs and purpose, along with sturdy barriers to prevent AI from becoming harmful, especially to vulnerable people. Participants also expressed concerns over the privacy of data, and over the balance between AI and human resources and the moments when human intervention is required.
